
    Natural Visualizations

    This paper demonstrates the prevalence of a shared characteristic between visualizations and images of nature. We analyzed visualization competitions and user studies of visualizations and found that the more preferred, better-performing visualizations exhibit more natural characteristics. Because our brains are wired to perceive natural images [SO01], testing a visualization for properties similar to those of natural images can indicate how readily our brains can absorb the data. In turn, a metric that measures a visualization’s similarity to a natural image may help determine the effectiveness of that visualization. We found that comparing the sizes and distribution of the objects in a visualization with those of natural standards correlates strongly with one’s preference for that visualization.
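
    The paper's specific metric is not reproduced here, but as a rough, hedged illustration of what a "natural image" statistic can look like, the sketch below estimates the slope of an image's radially averaged amplitude spectrum: natural images tend to show an approximately 1/f amplitude falloff, so a log-log slope near -1 is one commonly used proxy for naturalness. The function name and the use of NumPy are illustrative assumptions, not the authors' implementation.

```python
# Illustrative proxy only: slope of the radially averaged amplitude spectrum.
# Natural images tend toward a ~1/f amplitude falloff (log-log slope near -1);
# this is NOT the metric proposed in the paper.
import numpy as np

def spectral_slope(image: np.ndarray) -> float:
    """Estimate the log-log slope of a grayscale image's radial amplitude spectrum."""
    gray = image.astype(float)
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray)))

    # Radial distance of every frequency bin from the spectrum's center.
    h, w = gray.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    r = np.hypot(y - cy, x - cx).astype(int)

    # Average amplitude within each integer radius (skipping the DC component).
    radial_sum = np.bincount(r.ravel(), weights=spectrum.ravel())
    radial_count = np.bincount(r.ravel())
    radial_mean = radial_sum / np.maximum(radial_count, 1)
    limit = min(cy, cx)
    freqs, amps = np.arange(1, limit), radial_mean[1:limit]

    # Fit a line in log-log space; the slope summarizes the spectral falloff.
    slope, _ = np.polyfit(np.log(freqs), np.log(amps), 1)
    return slope

if __name__ == "__main__":
    noise = np.random.default_rng(0).random((256, 256))   # white noise: slope near 0
    print(f"white-noise slope: {spectral_slope(noise):.2f}")
```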

    Layout of Multiple Views for Volume Visualization: A User Study

    Volume visualizations can have drastically different appearances when viewed using a variety of transfer functions. A problem then occurs in trying to organize many different views on one screen. We conducted a user study of four layout techniques for these multiple views. We timed participants as they separated different aspects of volume data for both time-invariant and time-variant data using one of four different layout schemes. The layout technique had no impact on performance when used with time-invariant data. With time-variant data, however, the multiple-view layouts all resulted in better times than did a single-view interface. Surprisingly, the different layout techniques for multiple views resulted in no noticeable difference in user performance. In this paper, we describe our study and present the results, which could be used in the design of future volume visualization software to improve the productivity of the scientists who use it.

    Multiple Uncertainties in Time-Variant Cosmological Particle Data

    Though the media for visualization are limited, the potential dimensions of a dataset are not. In many areas of scientific study, understanding the correlations between those dimensions and their uncertainties is pivotal to mining useful information from a dataset. Obtaining this insight can necessitate visualizing the many relationships among temporal, spatial, and other dimensions of the data and its uncertainties. We utilize multiple views for interactive dataset exploration and selection of important features, and we apply those techniques to the unique challenges of cosmological particle datasets. We show how interactivity and the incorporation of multiple visualization techniques help overcome the problem of limited visualization dimensions and allow many types of uncertainty to be seen in correlation with other variables.
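
    As a hedged sketch of the multiple-view idea (not the authors' system), the snippet below links two scatterplot views of synthetic particle data: dragging a rectangle in the spatial view highlights the same particles in a second view that plots velocity against an assumed per-particle uncertainty. The dataset, variable names, and the use of Matplotlib's RectangleSelector are illustrative assumptions.

```python
# Minimal linked-brushing sketch over synthetic "particle" data (illustrative only).
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.widgets import RectangleSelector

rng = np.random.default_rng(0)
n = 2000
x, y = rng.uniform(0, 1, n), rng.uniform(0, 1, n)       # particle positions
velocity = rng.normal(0, 1, n)                          # one physical variable
uncertainty = np.abs(rng.normal(0.1, 0.05, n))          # assumed per-particle uncertainty

fig, (ax_pos, ax_unc) = plt.subplots(1, 2, figsize=(10, 4))
pos_view = ax_pos.scatter(x, y, s=4, color="lightgray")
unc_view = ax_unc.scatter(velocity, uncertainty, s=4, color="lightgray")
ax_pos.set(title="spatial view", xlabel="x", ylabel="y")
ax_unc.set(title="uncertainty view", xlabel="velocity", ylabel="uncertainty")

def on_select(press, release):
    """Highlight the particles selected in the spatial view in both views."""
    x0, x1 = sorted((press.xdata, release.xdata))
    y0, y1 = sorted((press.ydata, release.ydata))
    selected = (x >= x0) & (x <= x1) & (y >= y0) & (y <= y1)
    colors = np.where(selected, "crimson", "lightgray")
    pos_view.set_color(colors)
    unc_view.set_color(colors)
    fig.canvas.draw_idle()

selector = RectangleSelector(ax_pos, on_select, useblit=True)
plt.show()
```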

    Four types of ensemble coding in data visualizations

    Ensemble coding supports rapid extraction of visual statistics about distributed visual information. Researchers typically study this ability with the goal of drawing conclusions about how such coding extracts information from natural scenes. Here we argue that a second domain can serve as another strong inspiration for understanding ensemble coding: graphs, maps, and other visual presentations of data. Data visualizations allow observers to leverage their ability to perform visual ensemble statistics on distributions of spatial or featural visual information to estimate actual statistics on data. We survey the types of visual statistical tasks that occur within data visualizations across everyday examples, such as scatterplots, and more specialized images, such as weather maps or depictions of patterns in text. We divide these tasks into four categories: identification of sets of values, summarization across those values, segmentation of collections, and estimation of structure. We point to unanswered questions for each category and give examples of such cross-pollination in the current literature. Increased collaboration between the data visualization and perceptual psychology research communities can inspire new solutions to challenges in visualization while simultaneously exposing unsolved problems in perception research.
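
    To make the four categories concrete on data rather than on a percept, the toy sketch below computes each one explicitly for a synthetic scatterplot: identification (which points fall in a range of interest), summarization (a mean), segmentation (splitting the collection into groups), and estimation of structure (a correlation). The dataset and thresholds are invented for illustration; these are the statistics an observer's ensemble percept would approximate.

```python
# Toy illustration: the four kinds of ensemble statistics a viewer might extract
# from a scatterplot, computed explicitly (synthetic data and arbitrary thresholds).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 0.8 * x + rng.normal(0, 1.5, 200)          # noisy linear relationship

# 1. Identification: which values belong to a set of interest?
high_x = x > 7                                 # e.g. "the points on the right"

# 2. Summarization: a single statistic over many values.
mean_y = y.mean()

# 3. Segmentation: splitting the collection into groups.
left, right = y[~high_x], y[high_x]

# 4. Estimation of structure: a relationship across the whole display.
correlation = np.corrcoef(x, y)[0, 1]

print(f"points identified: {high_x.sum()}")
print(f"mean of y: {mean_y:.2f}")
print(f"group means: left {left.mean():.2f}, right {right.mean():.2f}")
print(f"x-y correlation: {correlation:.2f}")
```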

    The Set Salience Bias in Ensemble Decision-Making


    A Succinct Intro to R

    This book is a short introduction to the R language. It covers the basics of R that are not covered by analysis and visualization guides like R for Data Science. Consider it a quick way to get up to speed on R before diving into the analysis and visualization aspects. This example-focused guide assumes you are familiar with programming concepts but want to learn the R language. It offers more examples than an “R cheat sheet” without the verbosity of a language spec or an introduction to programming. http://r-guide.steveharoz.co

    Open Practices in Visualization Research

    Two fundamental tenets of scientific research are that it can be scrutinized and built upon. Both require that the collected data and supporting materials be shared, so others can examine, reuse, and extend them. Assessing the accessibility of these components and the paper itself can serve as a proxy for the reliability, replicability, and applicability of a field’s research. In this paper, I describe the current state of openness in visualization research and provide suggestions for authors, reviewers, and editors to improve open practices in the field. A free copy of this paper, the collected data, and the source code are available at https://osf.io/qf9na

    Comparison of Preregistration Platforms

    Preregistration can force researchers to front-load a lot of decision-making to an early stage of a project. Choosing which preregistration platform to use must therefore be one of those early decisions, and because a preregistration cannot be moved, that choice is permanent. This article aims to help researchers who are already interested in preregistration choose a platform by clarifying the differences between them. Preregistration criteria and features are explained and analyzed for sites that cater to a broad range of research fields, including GitHub, AsPredicted, Zenodo, the Open Science Framework (OSF), and an “open-ended” variant of OSF. While a private prespecification document can help mitigate self-deception, this guide considers publicly shared preregistrations that aim to improve credibility. It therefore defines three of the criteria (a timestamp, a registry, and persistence) as a bare minimum for a valid and reliable preregistration. GitHub and AsPredicted fail to meet all three. Zenodo and OSF meet the basic criteria and vary in which additional features they offer.
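
    As a small, hedged illustration of the minimum criteria as a checklist, the sketch below encodes the three requirements (timestamp, registry, persistence) in a simple data structure with a pass/unknown/fail check. Only the aggregate outcomes stated above are filled in (Zenodo and OSF satisfy all three); the remaining per-criterion values are placeholders to be completed from the article itself.

```python
# Hedged sketch: the three minimum preregistration criteria as a checklist.
# None marks a per-criterion value not stated in the abstract; consult the article.
from typing import Optional

CRITERIA = ("timestamp", "registry", "persistence")

platforms: dict[str, dict[str, Optional[bool]]] = {
    "GitHub":      dict.fromkeys(CRITERIA, None),   # abstract: does not meet all three
    "AsPredicted": dict.fromkeys(CRITERIA, None),   # abstract: does not meet all three
    "Zenodo":      dict.fromkeys(CRITERIA, True),   # abstract: meets the basic criteria
    "OSF":         dict.fromkeys(CRITERIA, True),   # abstract: meets the basic criteria
}

def meets_minimum(checks: dict[str, Optional[bool]]) -> Optional[bool]:
    """True if all three criteria hold, False if any fails, None if unknown."""
    values = [checks[c] for c in CRITERIA]
    if any(v is False for v in values):
        return False
    if all(v is True for v in values):
        return True
    return None

for name, checks in platforms.items():
    print(f"{name}: meets minimum = {meets_minimum(checks)}")
```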

    A Comment on "Evaluation of Sampling Methods for Scatterplots"

    "Evaluation of Sampling Methods for Scatterplots" claims that people perform dot-density comparisons best when using random sampling rather than other sampling approaches. This claim and other core conclusions of the article are not supported by the article’s empirical evidence. The article’s reported results and figures do not meet its own stated threshold of statistical significance, and the analyses are ill-suited for the research questions. Some of these issues are present in the article and could have been spotted by reviewers, whereas other issues were only noticeable because I requested the code and data after publication (which a reviewer would have been prohibited from demanding during the IEEE TVCG review process). A reanalysis calls into question whether any generalizable claims can be made from these results. This comment, its analysis code, and the original article’s data are freely available at http://osf.io/hsujr